A Comparison Model of Reinforcement-Learning and Win-Stay-Lose-Shift Decision-Making Processes: A Tribute to W.K. Estes.
Authors
Abstract
W.K. Estes often championed an approach to model development whereby an existing model was augmented with one or more free parameters, and a comparison between the simple model and the more complex, augmented model determined whether the additions were justified. Following this same approach, we used Estes' (1950) own augmented learning equations to improve the fit and plausibility of a win-stay-lose-shift (WSLS) model that we have used in much of our recent work. Estes also championed models that assume a comparison between multiple concurrent cognitive processes. In line with this, we develop a WSLS-Reinforcement Learning (RL) model in which the output of a WSLS process, which provides a probability of staying with or switching from the previously chosen option based on the last two decision outcomes, is compared with the output of an RL process, which provides a probability of selecting each option based on a comparison of the options' expected values. Fits to data from three different decision-making experiments suggest that the augmentations to the WSLS and RL models lead to a better account of decision-making behavior. Our results also support the assertion that human participants weigh the outputs of WSLS and RL processes during decision-making.
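To make the comparison of the two processes concrete, the following is a minimal illustrative sketch (in Python; not taken from the paper) of how the outputs of a WSLS process and an RL process might be weighed to produce choice probabilities in a simple two-option task. The functional forms, the parameter names (alpha, beta, p_stay_win, p_stay_lose, w), and the simplification of conditioning the WSLS process on only the most recent outcome rather than the last two are assumptions made for illustration.

```python
import numpy as np

def rl_probs(values, beta):
    """RL process: softmax over expected values yields choice probabilities."""
    z = beta * (values - np.max(values))  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def wsls_probs(n_options, last_choice, last_win, p_stay_win, p_stay_lose):
    """WSLS process: probability of repeating the previous choice depends on
    whether that choice was a win or a loss (simplified to one outcome back)."""
    p_stay = p_stay_win if last_win else p_stay_lose
    probs = np.full(n_options, (1.0 - p_stay) / (n_options - 1))
    probs[last_choice] = p_stay
    return probs

def mixed_choice_probs(values, last_choice, last_win, params):
    """Weigh the outputs of the WSLS and RL processes with one mixture weight w."""
    rl = rl_probs(values, params["beta"])
    wsls = wsls_probs(len(values), last_choice, last_win,
                      params["p_stay_win"], params["p_stay_lose"])
    return params["w"] * wsls + (1.0 - params["w"]) * rl

def update_values(values, choice, reward, alpha):
    """Linear-operator (delta-rule) update of the chosen option's expected value,
    in the spirit of Estes' (1950) learning equations."""
    new_values = values.copy()
    new_values[choice] += alpha * (reward - new_values[choice])
    return new_values

# Example: choice probabilities for the next trial of a two-armed task
params = {"alpha": 0.3, "beta": 2.0, "p_stay_win": 0.8, "p_stay_lose": 0.3, "w": 0.5}
values = np.array([0.6, 0.4])
print(mixed_choice_probs(values, last_choice=0, last_win=True, params=params))
```

In practice, the free parameters of such a sketch would be estimated by fitting the trial-by-trial choice probabilities to participants' choices (e.g., by maximum likelihood), which is what permits the kind of model comparisons described in the abstract.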
Related articles
Heterogeneity of strategy use in the Iowa gambling task: a comparison of win-stay/lose-shift and reinforcement learning models.
The Iowa gambling task (IGT) has been used in numerous studies, often to examine decision-making performance in different clinical populations. Reinforcement learning (RL) models such as the expectancy valence (EV) model have often been used to characterize choice behavior in this work, and accordingly, parameter differences from these models have been used to examine differences in decision-ma...
Fast or Rational? A Response-Times Study of Bayesian Updating
We present a simple model for decision making under uncertainty building on dual-process theories from psychology, and use it to illustrate a possible component of intuitive decision making of particular relevance for managerial settings. Decisions are the result of the interaction between two decision processes. The first one captures optimization based on Bayesian updating of beliefs. The sec...
Prefrontal, parietal, and temporal cortex networks underlie decision-making in the presence of uncertainty.
Decision-making in the presence of uncertainty, i.e., selecting a sequence of responses in an uncertain environment according to a self-generated plan of action, is a complex activity that involves both cognitive and noncognitive processes. Using functional magnetic resonance imaging, the neural substrates of decision-making in the presence of uncertainty are examined. Normal control subjects s...
Win-stay and win-shift lever-press strategies in an appetitively reinforced task for rats
Two experiments examined acquisition of win-stay, win-shift, lose-stay, and lose-shift rules by which hungry rats could earn food reinforcement. In Experiment 1, two groups of rats were trained in a two-lever operant task that required them to follow either a win-stay/lose-shift or a win-shift/lose-stay contingency. The rates of acquisition of the individual rules within each contingency differ...
Journal: Journal of Mathematical Psychology
Volume: 59
Pages: -
Publication year: 2014